Dynamical multilayer neural networks that learn continuous trajectories
Authors
Abstract
Feed-forward multilayer networks (perceptrons, radial basis function (RBF) networks, probabilistic networks, etc.) are currently used as "static systems" in pattern recognition, speech generation, identification and control, prediction, and other tasks (see, e.g., [1]). Theoretical work by several researchers, including [2] and [3], has proved that a perceptron network with a single hidden layer can uniformly approximate any continuous function over a compact domain, provided the hidden layer contains a sufficient number of neurons. Recently, interest has grown in applying neural networks to learning continuous trajectories and to the identification and/or control of dynamical systems. In such cases it is natural to use networks containing dynamical elements in the form of feedback connections, which are known as recurrent neural networks. Although a huge body of work exists on feed-forward neural networks, only a few results concern learning continuous trajectories by means of dynamical recurrent neural networks. Pearlmutter [4], [5], Werbos [6], and Toomarian and Barhen [7] developed a gradient-descent-based algorithm known as back-propagation through time (BPTT); it learns from time-varying external inputs and either produces desired temporal behaviours over a bounded time interval or trains non-fixed-point attractors. Williams and Zipser [8] and Meert and Ludik [9] constructed a gradient-descent learning rule, called real-time recurrent learning (RTRL), which can deal with time sequences of arbitrary length. Cohen et al. [10] used a stochastic search method based on an adaptive simulated annealing algorithm to efficiently train recurrent neural networks with time delays (TDRNN). In the above investigations, an effort was made to implement several benchmark tasks using minimum-size networks.
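As a concrete illustration of the BPTT idea mentioned above, the following minimal sketch computes exact gradients for a toy discrete-time recurrent network by unrolling it and back-propagating through the stored states. This is a simplified assumption for illustration only; the cited papers [4]-[7] treat richer, often continuous-time, formulations.

```python
import numpy as np

def bptt_grads(W, U, xs, ds):
    """Back-propagation through time for a minimal recurrent net:
        h_t = tanh(W h_{t-1} + U x_t),   L = 0.5 * sum_t ||h_t - d_t||^2
    xs: list of input vectors, ds: list of target vectors.
    Returns (loss, dL/dW, dL/dU). Illustrative sketch, not the authors' method."""
    n = W.shape[0]
    hs = [np.zeros(n)]                       # h_0 = 0
    for x in xs:                             # forward pass, storing all states
        hs.append(np.tanh(W @ hs[-1] + U @ x))
    L = 0.5 * sum(np.sum((h - d) ** 2) for h, d in zip(hs[1:], ds))

    dW, dU = np.zeros_like(W), np.zeros_like(U)
    carry = np.zeros(n)                      # gradient flowing back from future steps
    for t in reversed(range(len(xs))):       # backward pass through time
        dh = (hs[t + 1] - ds[t]) + carry     # direct loss term + recurrent term
        da = dh * (1.0 - hs[t + 1] ** 2)     # tanh'(a) = 1 - tanh(a)^2
        dW += np.outer(da, hs[t])
        dU += np.outer(da, xs[t])
        carry = W.T @ da
    return L, dW, dU

# Toy usage: a 2-unit network run for two time steps
W = np.array([[0.1, -0.2], [0.3, 0.05]])
U = np.eye(2) * 0.5
xs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
ds = [np.array([0.2, 0.0]), np.array([0.0, 0.2])]
loss, dW, dU = bptt_grads(W, U, xs, ds)
```

Because the whole state history is stored and replayed, memory grows with the length of the interval; this is the practical trade-off that motivates RTRL-style online rules mentioned above.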
In all the above investigations, a dynamical neural network evolves in accordance with the following general equations
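The equations themselves are not reproduced in this excerpt. One widely used form of such dynamics is the continuous-time recurrent network tau_i * dx_i/dt = -x_i + sum_j w_ij * sigma(x_j) + I_i(t), as in Pearlmutter's formulation [4]. The sketch below Euler-integrates a network of this form; the function name, weights, and input signal are illustrative assumptions, not the authors' exact system.

```python
import numpy as np

def simulate_ctrnn(W, I, x0, tau=1.0, dt=0.01, steps=500):
    """Euler-integrate a generic continuous-time recurrent network:
        tau * dx/dt = -x + W @ tanh(x) + I(t)
    W: (n, n) weight matrix; I: callable mapping time t to an input vector.
    Returns the state trajectory, shape (steps + 1, n). Illustrative sketch."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for k in range(steps):
        t = k * dt
        dx = (-x + W @ np.tanh(x) + I(t)) / tau
        x = x + dt * dx                  # forward Euler step
        traj.append(x.copy())
    return np.array(traj)

# Example: a two-neuron network driven by a sinusoidal external input
W = np.array([[0.0, 1.5], [-1.5, 0.0]])
I = lambda t: np.array([np.sin(t), 0.0])
traj = simulate_ctrnn(W, I, x0=[0.1, 0.0])
```

Forward Euler is the simplest choice here; a smaller `dt` or a higher-order integrator would be needed for stiff weight matrices.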
Similar resources
Dimension Reduction of Biological Neuron Models by Artificial Neural Networks
An artificial neural network approach for dimension reduction of dynamical systems is proposed and applied to conductance-based neuron models. Networks with bottleneck layers of continuous-time dynamical units could make a 2-dimensional model from the trajectories of the Hodgkin-Huxley model and a 3-dimensional model from the trajectories of a 6-dimensional bursting neuron model. Nullcline analys...
A Local Algorithm to Learn Trajectories with Stochastic Neural Networks
This paper presents a simple algorithm to learn trajectories with a continuous-time, continuous-activation version of the Boltzmann machine. The algorithm takes advantage of intrinsic Brownian noise in the network to easily compute gradients using entirely local computations. The algorithm may be ideal for parallel hardware implementations. This paper presents a learning algorithm to train cont...
Submitted to IEEE Transactions on Neural Networks
We present a new approach to the modeling of continuous-time systems observed in discrete time. The rationale behind this approach is that discrete-time models of continuous-time systems ought to be invertible and that we therefore should investigate group theoretic methods: We use compositions of invertible basis transformations to approximate the (invertible) map of a dynamical system. The ap...
Introducing Growable Deep Modular Neural Networks with a Double Spatio-Temporal Structure to Improve Continuous Persian Speech Recognition
In this article, growable deep modular neural networks for continuous speech recognition are introduced. These networks can be grown to implement the spatio-temporal information of the frame sequences at their input layer as well as their labels at the output layer at the same time. The trained neural network with such a double spatio-temporal association structure can learn the phonetic sequence...
Design of second order neural networks as dynamical control systems that aim to minimize nonconvex scalar functions
This article presents a unified way to design neural networks characterized as second-order ordinary differential equations (ODEs) that aim to find the global minimum of nonconvex scalar functions. These neural networks, alternatively referred to as continuous-time algorithms, are interpreted as dynamical closed-loop control systems. The design is based on the control Liapunov function (CLF) met...